XBPM: Explainable Business Process Monitoring

The XBPM project explores how explainable AI techniques can be applied to predictive business process monitoring.

Initial Situation

Predictive business process monitoring predicts how an ongoing business process instance (also called a case) will unfold. To this end, it uses the sequence of events produced by the execution of the case so far to make predictions about the future state of the case. If the predicted future state indicates a problem, the ongoing case may be proactively adapted, e.g., by re-scheduling process activities or by changing the assignment of resources. Such proactive business process adaptation can prevent problems from occurring and can mitigate the impact of upcoming problems during process execution. As an example, a delay in the expected delivery time of a freight transport process may incur contractual penalties. If a delay is predicted, faster alternative transport activities (such as air delivery instead of road delivery) can be scheduled to prevent the delay and thus the penalty.
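To make this concrete, the following minimal sketch shows what such a prediction step could look like, assuming Python with scikit-learn; the activity names, the frequency-based prefix encoding, and the toy training data are purely illustrative assumptions, not part of the project.

    # Minimal sketch: predict the outcome of an ongoing case from its
    # event prefix (illustrative activities, encoding, and data).
    from sklearn.ensemble import RandomForestClassifier

    ACTIVITIES = ["order_received", "load_truck", "road_delivery",
                  "air_delivery", "customs_check", "delivered"]

    def encode_prefix(prefix):
        """Frequency encoding: how often each activity occurred so far."""
        return [prefix.count(a) for a in ACTIVITIES]

    # Toy training data: event prefixes of completed cases, labeled with
    # whether the case eventually was delayed (1) or on time (0).
    train_prefixes = [
        ["order_received", "load_truck", "road_delivery"],
        ["order_received", "load_truck", "road_delivery", "customs_check"],
        ["order_received", "air_delivery"],
        ["order_received", "load_truck", "air_delivery"],
    ]
    train_labels = [1, 1, 0, 0]

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit([encode_prefix(p) for p in train_prefixes], train_labels)

    # Prediction for an ongoing case; a high delay probability could
    # trigger a proactive adaptation such as switching to air delivery.
    ongoing = ["order_received", "load_truck", "road_delivery"]
    print("delay probability:", model.predict_proba([encode_prefix(ongoing)])[0][1])

In practice, richer sequence encodings (or recurrent neural networks operating directly on the event sequence) are typically used, but the prefix-in, prediction-out interface stays the same.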

Compared to simple prediction models such as decision trees, more sophisticated prediction models such as random forests and artificial neural networks consistently achieve better prediction accuracy for various types of predictive process monitoring problems. The major drawback of these more sophisticated models, however, is their lack of interpretability. While random forests are in principle interpretable, the size of the ensembles (typically around 50 to 200 trees) makes interpretation difficult, rendering them black boxes in practice. Artificial neural networks intrinsically appear as black boxes to developers and users, because it is hard to infer their logic from the many layers and the large number of neurons and weights.

Using such black-box models without being able to interpret their decisions entails risks. As an example, biased training data can lead to biased models and thereby to biased decisions. These risks are not negligible if black-box models are used to make critical business decisions. In addition, the inability to understand the decisions of such models limits their adoption.

Solution Approach

To facilitate the interpretability of black-box models, research under the label of "Explainable AI" has recently regained interest. In this project, we explore how such techniques can be applied, and how they may have to be adapted, to explain predictive process monitoring models.

Goals

Handling Process Constraints during Explanation

One main type of explainable AI applied to process monitoring is interpretable model induction (see the conceptual overview depicted below). Here, random samples labeled by the black-box model are used to induce an interpretable model (such as a decision tree) that provides a locally faithful approximation of the black-box model. This interpretable model then serves as the basis for explanations. However, existing approaches generate random samples without considering process constraints. The control flow of a business process typically constrains the order in which its activities may be executed. As a result, process-constraint-agnostic approaches risk generating unrealistic explanations that refer to behavior that can never occur due to these constraints. The XBPM project will extend existing interpretable model induction techniques to take process constraints into account by adapting the sampling process.
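The following sketch illustrates the intended direction, assuming Python with scikit-learn: candidate traces are sampled only along transitions permitted by the control flow, labeled by the black-box model, and then used to induce a decision-tree surrogate. The transition relation, the encoding, and the stand-in black-box model are illustrative placeholders, not the project's actual technique.

    # Sketch of constraint-aware interpretable model induction. The
    # transition relation and the black-box model are placeholders.
    import random
    from sklearn.tree import DecisionTreeClassifier

    ACTIVITIES = ["A", "B", "C", "D"]
    ALLOWED_NEXT = {"A": {"B", "C"}, "B": {"C", "D"}, "C": {"D"}, "D": set()}

    def sample_trace(max_len=5):
        """Sample a random trace that respects the control-flow constraints."""
        trace = ["A"]
        while len(trace) < max_len and ALLOWED_NEXT[trace[-1]]:
            trace.append(random.choice(sorted(ALLOWED_NEXT[trace[-1]])))
        return trace

    def encode(trace):
        return [trace.count(a) for a in ACTIVITIES]

    def black_box_predict(trace):
        """Stand-in for the opaque prediction model being explained."""
        return int("B" in trace)  # e.g., 1 = delay predicted

    # 1) Sample only process-conformant traces, 2) label them with the
    # black box, 3) induce an interpretable surrogate decision tree.
    traces = [sample_trace() for _ in range(500)]
    X = [encode(t) for t in traces]
    y = [black_box_predict(t) for t in traces]
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

The key difference to constraint-agnostic induction lies in sample_trace: samples that violate the control flow are never generated (or, equivalently, would be rejected), so the surrogate is only ever induced from behavior that can actually occur.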

Generating Counterfactual Explanations

Current approaches for explainable process monitoring focus on explaining why a concrete prediction was made. While such explanations can be helpful, humans typically do not ask why a concrete prediction was made, but why it was made instead of another one; in other words, humans are interested in counterfactuals. As an added benefit, counterfactual explanations are often much simpler, as they focus on the differences that matter. The XBPM project will leverage decision-tree-based solutions for generating counterfactual explanations and extend them for use in process monitoring.
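As a rough illustration, the sketch below (again Python with scikit-learn) implements one simple decision-tree-based counterfactual search: it enumerates the root-to-leaf paths of the tree and returns the input modification with the smallest L1 cost that reaches a leaf with a different predicted class. The threshold handling is a simplified heuristic and the toy data is illustrative.

    # Sketch of a decision-tree-based counterfactual search: among all
    # leaves with a different predicted class, find the one reachable
    # with the smallest L1 change to the input.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def leaf_paths(tree):
        """Yield (conditions, leaf_class) for every root-to-leaf path."""
        t = tree.tree_
        stack = [(0, [])]  # (node_id, [(feature, threshold, go_left)])
        while stack:
            node, conds = stack.pop()
            if t.children_left[node] == -1:  # node is a leaf
                yield conds, int(np.argmax(t.value[node]))
            else:
                f, thr = t.feature[node], t.threshold[node]
                stack.append((t.children_left[node], conds + [(f, thr, True)]))
                stack.append((t.children_right[node], conds + [(f, thr, False)]))

    def counterfactual(tree, x, eps=1e-3):
        """Smallest-L1-change input that flips the tree's prediction."""
        original = tree.predict([x])[0]
        best, best_cost = None, np.inf
        for conds, leaf_class in leaf_paths(tree):
            if leaf_class == original:
                continue
            cf = np.array(x, dtype=float)
            for f, thr, go_left in conds:
                if go_left and cf[f] > thr:
                    cf[f] = thr        # move down to satisfy "<= threshold"
                elif not go_left and cf[f] <= thr:
                    cf[f] = thr + eps  # move up to satisfy "> threshold"
            cost = np.abs(cf - x).sum()
            if cost < best_cost:
                best, best_cost = cf, cost
        return best

    # Toy usage: features could be, e.g., activity counts of a case.
    X = [[0, 1], [1, 0], [1, 1], [0, 0]]
    y = [1, 0, 1, 0]
    tree = DecisionTreeClassifier(random_state=0).fit(X, y)
    print(counterfactual(tree, np.array([0.0, 1.0])))

Because only the features on the chosen path are changed, the resulting explanation naturally highlights the few differences that matter, which is exactly the conciseness benefit mentioned above.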